A. Probability Background
\[ \def\E{\mathsf E} \def\F{\mathcal F} \def\P{\mathsf P} \def\R{\mathbb R} \]
1 Sigma Algebra Filtrations
1.1 Sigma Algebras and Random Variables
- Definitions and Basics:
- Sigma Algebra \(\mathcal{F}\): A collection of subsets of a given set (such as \([0, 1]\)) that contains the entire set and is closed under complementation and countable unions.
- Random Variable \(X\): A function from a probability space into a measurable space that is measurable with respect to the sigma algebra on the target space; here the target is always the real line with its Borel sigma algebra.
- Generating Sigma Algebras:
- Generated Sigma Algebra \(\sigma(X)\): The smallest sigma algebra with respect to which the random variable \(X\) is measurable. It consists of the preimages of Borel sets under \(X\).
- Example: The identity function \(X(\omega) = \omega\) on \([0, 1]\) generates the Borel sigma algebra on \([0, 1]\) because its preimages are the Borel sets themselves.
- Properties of Random Variables and Generated Algebras:
- Injective measurable functions, such as the identity \(X(\omega) = \omega\) or any other measurable bijection from \([0, 1]\) to \([0, 1]\), generate the full Borel sigma algebra.
- Functions like \(Y(\omega) = \operatorname{frac}(2\omega) = 2\omega - \lfloor 2\omega \rfloor\), which are not injective, generate a smaller (but still infinite) sigma algebra, because they map distinct inputs to the same output and so cannot distinguish between those inputs; see the preimage computation after this list.
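For concreteness, the preimage structure of \(Y\) can be written out explicitly. For a Borel set \(B \subseteq [0, 1)\),
\[ Y^{-1}(B) = \big\{\omega : \operatorname{frac}(2\omega) \in B\big\} = \tfrac{1}{2}B \;\cup\; \big(\tfrac{1}{2}B + \tfrac{1}{2}\big) \]
(up to the single endpoint \(\omega = 1\)), so every set in \(\sigma(Y)\) is invariant under the half-shift \(\omega \mapsto \operatorname{frac}(\omega + \tfrac{1}{2})\). For example \([0, \tfrac{1}{4}]\) lies in \(\sigma(X)\) but not in \(\sigma(Y)\), since its shifted copy \([\tfrac{1}{2}, \tfrac{3}{4}]\) is a different set; this confirms \(\sigma(Y) \subsetneq \sigma(X)\).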
1.2 Filtrations and Information Growth
- Filtration \(\mathcal{F}_t\):
- A non-decreasing family of sub-sigma algebras of \(\mathcal{F}\), indexed by time \(t\), typically representing the information available up to time \(t\).
- Connection with Random Variables:
- Often, \(\mathcal{F}_t\) can be thought of as being generated by a family of random variables \(X_t\), where \(\mathcal{F}_t = \sigma(X_s : s \leq t)\), so that \(\mathcal{F}_t\) encapsulates an increasing amount of information as \(t\) progresses; in many examples the \(X_t\) also converge to a limiting random variable \(X\). A concrete example follows this list.
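A standard discrete-time example: on \(\Omega = [0, 1]\) with Lebesgue measure, let \(X_n(\omega)\) denote the \(n\)-th binary digit of \(\omega\). Then
\[ \F_n = \sigma(X_1, \dots, X_n) \]
is the finite sigma algebra whose atoms are the \(2^n\) dyadic intervals \([k 2^{-n}, (k+1) 2^{-n})\), \(k = 0, \dots, 2^n - 1\). Each \(\F_n \subset \F_{n+1}\), and together the \(\F_n\) generate the Borel sigma algebra on \([0, 1]\).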
1.3 Sub-Sigma Algebras and Their Complexities
- Finite and Infinite Sub-Sigma Algebras:
- Finite sub-sigma algebras are generated by random variables taking finitely many values: a generator is constant on each atom of the sigma algebra and takes distinct values on distinct atoms.
- Infinite sub-sigma algebras, particularly those with no non-point atoms, might be generated by continuous functions but exhibit more complexity, often depending on subtle measure-theoretic properties.
- Examples of Complex Sigma Algebras:
- Completion Issues: The completion of a Borel sigma algebra under a measure adjoins all null sets. The completion, although it contains the original algebra, cannot be generated by any single Borel-measurable function. (Consider \(\mathcal{F}\) as the Borel sigma algebra on \([0, 1]\) and \(G\) as the completion of \(\mathcal{F}\) under Lebesgue measure. While \(\mathcal{F}\) itself is generated by the identity function \(X(\omega) = \omega\), the completion \(G\) contains, beyond the Borel sets, all subsets of Borel sets of measure zero. Some of these are not Borel, and since every set in \(\sigma(Y)\) is a Borel set whenever \(Y\) is Borel-measurable, no such \(Y\) can generate \(G\).)
- Non-Measurable Sets: Including non-measurable sets (e.g., Vitali sets) leads to sigma algebras that cannot be represented as \(\sigma(Y)\) for any measurable random variable. (If \(G\) includes certain non-measurable sets (relative to a given measure), such as those constructed via the axiom of choice (like Vitali sets), then \(G\) cannot be represented as \(\sigma(Y)\) for any measurable random variable \(Y\), since measurability of \(Y\) would imply that all sets in \(\sigma(Y)\) are measurable.)
- Tail Sigma Algebras: These capture events invariant under changes to any finite initial segment of a sequence and are not typically generated by any finite collection of observations; a concrete example follows this list.
- Product Spaces: Consider a product space with a sigma algebra generated by a complex array of interdependent random variables. The sigma algebra of this space may involve intricate dependencies that cannot be captured by any single random variable without losing the rich structure of interactions between the components.
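To make the tail sigma algebra point concrete: for an independent sequence \(X_1, X_2, \dots\), the event
\[ A = \Big\{ \sum_{n \ge 1} X_n \text{ converges} \Big\} \]
lies in the tail sigma algebra \(\bigcap_{m \ge 1} \sigma(X_m, X_{m+1}, \dots)\), because convergence of the series is unaffected by changing finitely many terms. By Kolmogorov's zero-one law, \(\P(A) \in \{0, 1\}\), even though \(A\) is not determined by any finite collection of the \(X_n\).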
2 Uniformly Integrable
Uniform integrability is an important concept in probability theory, particularly in the contexts of limit theorems, stochastic integration, and martingale theory. It provides a condition that strengthens convergence results, such as ensuring that convergence in distribution implies convergence of expectations.
2.1 Definition
A family of random variables \(\{X_\alpha\}\) indexed by \(\alpha\) in some index set \(A\) is called uniformly integrable if the following condition is satisfied:
Integrability Condition: For all \(\alpha\), \[ \mathbb{E}[|X_\alpha|] < \infty. \]
Uniform Tail Condition: The family \(\{X_\alpha\}\) must satisfy \[ \lim_{c \to \infty} \sup_{\alpha \in A} \mathbb{E}[|X_\alpha| \mathbf{1}_{\{|X_\alpha| > c\}}] = 0. \] This condition states that the expected values of the portions of \(X_\alpha\) exceeding any large threshold \(c\) become uniformly negligible as \(c\) increases.
Uniform integrability means that for all \(\epsilon > 0\), there exists a threshold \(C > 0\) such that for all \(c > C\) and for all random variables \(X_\alpha\) in the family, the expected value of \(|X_\alpha|\) over the event that \(|X_\alpha| > c\) is less than \(\epsilon\).
Uniform integrability ensures that the tails of all the random variables in the family become negligible uniformly as the threshold \(c\) increases. This guarantees that the contribution to the total expectation from extreme values (tails) is uniformly small, and the influence of these tails on the convergence properties or expectation calculations is minimal across the entire family.
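As an informal numerical illustration (not part of the theory above), the tail quantity \(\sup_{\alpha} \E[|X_\alpha| \mathbf{1}_{\{|X_\alpha| > c\}}]\) can be estimated by Monte Carlo and compared across a family that fails uniform integrability and one that satisfies it. The sketch below assumes NumPy; the family choices \(X_n = n\,\mathbf{1}_{\{U < 1/n\}}\) (not UI) and \(X_n = (1 + 1/n)\,U\) with \(U\) uniform on \([0,1]\) (dominated by \(2\), hence UI) are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def tail_expectation(samples, c):
    """Monte Carlo estimate of E[|X| 1{|X| > c}]."""
    x = np.abs(samples)
    return np.mean(np.where(x > c, x, 0.0))


def sample_not_ui(n, size=200_000):
    # X_n = n with probability 1/n, else 0: E[X_n] = 1 for every n,
    # but E[X_n 1{X_n > c}] = 1 whenever n > c, so the sup never vanishes.
    return n * (rng.random(size) < 1.0 / n)


def sample_ui(n, size=200_000):
    # X_n uniform on [0, 1 + 1/n]: dominated by the constant 2, hence UI.
    return (1.0 + 1.0 / n) * rng.random(size)


for c in [1, 10, 100]:
    ns = [2, 10, 100, 1000]
    sup_bad = max(tail_expectation(sample_not_ui(n), c) for n in ns)
    sup_good = max(tail_expectation(sample_ui(n), c) for n in ns)
    print(f"c = {c:4d}   sup tail (not UI) ~ {sup_bad:.3f}   sup tail (UI) ~ {sup_good:.3f}")
```

For the first family the estimated supremum stays near \(1\) however large \(c\) is taken, while for the dominated family it is already \(0\) once \(c \ge 2\).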
2.2 Explanation and Importance
Purpose: Uniform integrability controls the “tails” of the random variables in the family. It ensures that no individual random variable in the family has a tail heavy enough to significantly affect the average behavior as the threshold \(c\) grows very large.
Practical Implications: In the context of converging sequences of random variables, if a sequence \(\{X_n\}\) converges in probability to some random variable \(X\) and \(\{X_n\}\) is uniformly integrable, then \(\{X_n\}\) also converges to \(X\) in the \(L^1\) sense. This implies: \[ \lim_{n \to \infty} \E[|X_n - X|] = 0. \] Thus, uniform integrability is crucial for exchanging limits and expectations, a common requirement in advanced probability and statistical applications.
Relation to Martingales: For martingale sequences, uniform integrability is a key condition for the martingale convergence theorem, which states that a uniformly integrable martingale converges almost surely and in \(L^1\) to a limit random variable.
Testing Uniform Integrability: While the formal definition involves checking the uniform limit condition, in practice uniform integrability can often be established by showing that the family of random variables is bounded in \(L^p\) for some \(p > 1\). That is, \(\sup_{\alpha \in A} \E[|X_\alpha|^p] < \infty\) for some \(p > 1\) implies uniform integrability; see the bound below.
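The bound behind this criterion is one line (a special case of the de la Vallée-Poussin criterion): if \(\sup_{\alpha \in A} \E[|X_\alpha|^p] \le M < \infty\) for some \(p > 1\), then on the event \(\{|X_\alpha| > c\}\) we have \(|X_\alpha| \le |X_\alpha|^p / c^{p-1}\), so
\[ \sup_{\alpha \in A} \E\big[|X_\alpha| \mathbf{1}_{\{|X_\alpha| > c\}}\big] \;\le\; \sup_{\alpha \in A} \frac{\E[|X_\alpha|^p]}{c^{p-1}} \;\le\; \frac{M}{c^{p-1}} \;\to\; 0 \quad \text{as } c \to \infty. \]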
UI implies bounded in \(L^1\):
A family of random variables \(\{X_i\}_{i \in I}\) is uniformly integrable when \[ \sup_{i \in I} \int_{\{|X_i| > K\}} |X_i| \, d\mathbb{P} \rightarrow 0 \quad \text{as} \quad K \rightarrow \infty. \] This also gives uniform absolute continuity: for every \(\epsilon > 0\), there exists a \(\delta > 0\) such that for any event \(A\) with \(\mathbb{P}(A) < \delta\), \[ \sup_{i \in I} \int_A |X_i| \, d\mathbb{P} < \epsilon \] (split \(\int_A |X_i| \, d\mathbb{P}\) into the parts where \(|X_i| \leq K\) and \(|X_i| > K\)).
Bounded \(L^1\) Norm: To see that the \(L^1\) norms \(\mathbb{E}[|X_i|]\) are bounded over the family, start from the layer-cake formula: \[ \mathbb{E}[|X_i|] = \int_0^\infty \mathbb{P}(|X_i| > t) \, dt. \] Since the family is uniformly integrable, for any \(\epsilon > 0\) there exists \(K > 0\) such that \[ \sup_{i \in I} \int_{\{|X_i| > K\}} |X_i| \, d\mathbb{P} < \epsilon. \] The first part of the integral satisfies \(\int_0^K \mathbb{P}(|X_i| > t)\, dt \leq K\), while the tail satisfies \(\int_K^\infty \mathbb{P}(|X_i| > t)\, dt \leq \mathbb{E}[|X_i| \mathbf{1}_{\{|X_i| > K\}}] < \epsilon\). Hence \(\mathbb{E}[|X_i|] \leq K + \epsilon\) for every \(i\), a bound that does not depend on \(i\).
Implication of Bounded \(L^1\) Norm: Uniform integrability restricts the growth of the expectations of the absolute values of the family members, ensuring they are bounded by some \(M < \infty\): \[ \sup_{i \in I} \mathbb{E}[|X_i|] \leq M. \]
This boundedness is crucial in various results in probability theory, particularly in proving convergence theorems like Vitali’s convergence theorem, which extends the dominated convergence theorem to the case of uniformly integrable families of random variables.
[Meyer p.18] If integrable \(X_n\) converge a.e. to \(X\), then \(X\) is integrable and the convergence holds in \(L^1\) iff the \(X_n\) are UI. If the \(X_n\) are positive, they are UI iff \(\lim \E[X_n] = \E[X] < \infty\).
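A standard example illustrating both statements: on \([0, 1]\) with Lebesgue measure, take \(X_n = n \, \mathbf{1}_{[0, 1/n]}\). Then each \(X_n\) is integrable and \(X_n \to 0\) almost everywhere, but
\[ \E[X_n] = 1 \not\to 0 = \E[X], \qquad \E[|X_n - 0|] = 1 \not\to 0, \]
so the convergence does not hold in \(L^1\) and, by the criterion for positive variables, the family \(\{X_n\}\) is not uniformly integrable.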
3 Square-Integrable
A stochastic process \(M = \{M_t, t \geq 0\}\) is a square integrable martingale if:
- It is a martingale (and hence each \(M_t\) is integrable)
- Uniform Square Integrability: \[ \sup_{t \geq 0} \mathbb{E}[M_t^2] < \infty. \] This stronger condition ensures that not only is each \(M_t\) square integrable, but the supremum of these second moments over all time is also finite; see the example below.
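Example of the distinction: standard Brownian motion \(B_t\) is a martingale with every \(B_t\) square integrable, yet \(\sup_{t \ge 0} \E[B_t^2] = \sup_{t \ge 0} t = \infty\), so it is not a square integrable martingale in the uniform sense above. The stopped process \(M_t = B_{t \wedge 1}\) satisfies \(\sup_{t \ge 0} \E[M_t^2] = 1 < \infty\) and does qualify.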